In clinical practice, radiologists rely on lesion size when distinguishing metastatic from non-metastatic lesions. A prerequisite for lesion sizing is their detection, as it facilitates the downstream assessment of tumors. However, lesions vary in size and appearance across CT scans, and radiologists often miss small lesions during a busy clinical day. To overcome these challenges, we propose the use of state-of-the-art detection neural networks to flag suspicious lesions present in the NIH DeepLesion dataset for sizing. Furthermore, we incorporate a bounding box fusion technique to minimize false positives (FP) and improve detection accuracy. Finally, to resemble clinical usage, we constructed an ensemble of the best detection models to localize lesions with a precision of 65.17% and a sensitivity of 91.67% at 4 FP per image. Our results improve upon the performance of current state-of-the-art methods for lesion detection in challenging CT scans.
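Below is a minimal sketch of the kind of bounding-box fusion step mentioned in the abstract above: overlapping detections pooled from several models are merged into score-weighted boxes. It is illustrative only; the IoU threshold, box format, and greedy clustering are assumptions, not the authors' implementation.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_boxes(boxes, scores, iou_thr=0.5):
    """Greedy weighted fusion: boxes that overlap above iou_thr are merged
    into one box whose coordinates are the score-weighted cluster average."""
    order = np.argsort(scores)[::-1]
    fused, used = [], np.zeros(len(boxes), dtype=bool)
    for i in order:
        if used[i]:
            continue
        cluster = [i]
        used[i] = True
        for j in order:
            if not used[j] and iou(boxes[i], boxes[j]) >= iou_thr:
                cluster.append(j)
                used[j] = True
        w = scores[cluster][:, None]
        fused.append((np.sum(boxes[cluster] * w, axis=0) / w.sum(),
                      scores[cluster].mean()))
    return fused

# Detections pooled from an ensemble of detectors on one CT slice (toy values).
boxes = np.array([[30, 40, 80, 90], [32, 42, 78, 88], [200, 210, 240, 250]], float)
scores = np.array([0.9, 0.8, 0.6])
print(fuse_boxes(boxes, scores))
```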
The identification of lymph nodes (LN) in T2 Magnetic Resonance Imaging (MRI) is an important step performed by radiologists during the evaluation of lymphoproliferative diseases. The size of a node plays a crucial role in its staging, and radiologists sometimes use an additional contrast sequence, such as diffusion weighted imaging (DWI), for confirmation. However, lymph nodes have diverse appearances in T2 MRI scans, making staging for metastasis difficult. Moreover, radiologists often miss smaller metastatic lymph nodes over the course of a busy day. To deal with these issues, we propose to use the DEtection TRansformer (DETR) network to localize suspicious metastatic lymph nodes in challenging T2 MRI scans acquired with different scanners and exam protocols. False positives (FP) were reduced through a bounding box fusion technique, and a precision of 65.41% and a sensitivity of 91.66% at 4 FP per image were achieved. To the best of our knowledge, our results improve upon the current state-of-the-art for lymph node detection in T2 MRI scans.
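A minimal sketch of running a DETR-style detector on a 2D slice, using the Hugging Face transformers implementation. The COCO-pretrained checkpoint, confidence threshold, and synthetic input are illustrative stand-ins; the paper's model is fine-tuned on annotated T2 MRI, which is not reproduced here.

```python
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

# Illustrative: a COCO-pretrained DETR checkpoint; the paper instead
# fine-tunes DETR on T2 MRI lymph node annotations.
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

image = Image.new("RGB", (512, 512))  # stand-in for a T2 MRI slice rendered to RGB
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes to (score, label, box) above a confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=target_sizes
)[0]

for score, box in zip(results["scores"], results["boxes"]):
    print(f"candidate node, score={score:.2f}, box={box.tolist()}")
```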
In this paper we explore the task of modeling (semi) structured object sequences; in particular we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key, over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as over other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, along with detailed ablation studies that motivate our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
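A minimal sketch of the two components under one reading of the description above: a shared temporal encoder over each key's value sequence (TVM) followed by self-attention across the key-conditioned vectors (KA). The module sizes, mean pooling, and shared encoder are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TVMKASketch(nn.Module):
    """Toy version: a transformer encoder encodes each key's value sequence
    (TVM); a second encoder self-attends over the resulting per-key vectors
    to summarize the whole structured object sequence (KA)."""

    def __init__(self, vocab_size, num_keys, d_model=64):
        super().__init__()
        self.value_emb = nn.Embedding(vocab_size, d_model)
        self.key_emb = nn.Embedding(num_keys, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.tvm = nn.TransformerEncoder(layer, num_layers=2)
        self.ka = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, values):
        # values: (batch, num_keys, seq_len) integer value ids per key.
        b, k, t = values.shape
        x = self.value_emb(values).view(b * k, t, -1)
        x = self.tvm(x).mean(dim=1).view(b, k, -1)   # one vector per key
        x = x + self.key_emb(torch.arange(k, device=values.device))
        return self.ka(x).mean(dim=1)                # sequence-level summary

model = TVMKASketch(vocab_size=1000, num_keys=8)
out = model(torch.randint(0, 1000, (2, 8, 16)))
print(out.shape)  # torch.Size([2, 64])
```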
Diabetic Retinopathy (DR) is considered one of the primary concerns due to its effect on vision loss among most people with diabetes globally. The severity of DR is mostly assessed manually by ophthalmologists from fundus photography-based retina images. This paper deals with an automated understanding of the severity stages of DR. In the literature, researchers have focused on this automation using traditional machine learning-based algorithms and convolutional architectures. However, past works have rarely focused on the essential parts of the retinal image when trying to improve model performance. In this paper, we adopt transformer-based learning models to capture the crucial features of retinal images to understand DR severity better. We ensemble image transformers, adopting four models, namely ViT (Vision Transformer), BEiT (Bidirectional Encoder representation for image Transformer), CaiT (Class-Attention in Image Transformers), and DeiT (Data efficient image Transformers), to infer the degree of DR severity from fundus photographs. For experiments, we used the publicly available APTOS-2019 blindness detection dataset, where the performances of the transformer-based models were quite encouraging.
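A minimal sketch of logit-level ensembling over the four transformer backbones named above, using timm model builders. The specific checkpoint names, 224x224 input size, and plain probability averaging are assumptions; the paper fine-tunes the models on APTOS-2019, which this sketch does not do.

```python
import timm
import torch

# Illustrative backbones standing in for the four fine-tuned DR-severity
# classifiers (APTOS-2019 has 5 severity classes).
NUM_CLASSES = 5
names = ["vit_base_patch16_224", "beit_base_patch16_224",
         "cait_s24_224", "deit_base_patch16_224"]
models = [timm.create_model(n, pretrained=False, num_classes=NUM_CLASSES).eval()
          for n in names]

def ensemble_predict(batch):
    """Average softmax probabilities across the four transformers."""
    with torch.no_grad():
        probs = [m(batch).softmax(dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0).argmax(dim=-1)

fundus = torch.randn(2, 3, 224, 224)  # stand-in for preprocessed fundus images
print(ensemble_predict(fundus))
```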
Due to the high activation sparsity and use of accumulates (AC) instead of expensive multiply-and-accumulates (MAC), neuromorphic spiking neural networks (SNNs) have emerged as a promising low-power alternative to traditional DNNs for several computer vision (CV) applications. However, most existing SNNs require multiple time steps for acceptable inference accuracy, hindering real-time deployment and increasing spiking activity and, consequently, energy consumption. Recent works proposed direct encoding that directly feeds the analog pixel values in the first layer of the SNN in order to significantly reduce the number of time steps. Although the overhead for the first layer MACs with direct encoding is negligible for deep SNNs and the CV processing is efficient using SNNs, the data transfer between the image sensors and the downstream processing costs significant bandwidth and may dominate the total energy. To mitigate this concern, we propose an in-sensor computing hardware-software co-design framework for SNNs targeting image recognition tasks. Our approach reduces the bandwidth between sensing and processing by 12-96x and the resulting total energy by 2.32x compared to traditional CV processing, with a 3.8% reduction in accuracy on ImageNet.
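A minimal sketch of direct encoding as described above: the analog image drives the first (MAC) convolution at every time step, while later layers operate on binary spikes. The integrate-and-fire dynamics, soft reset, and two-time-step setting are illustrative choices, not the paper's exact model.

```python
import torch
import torch.nn as nn

class SpikingLayer(nn.Module):
    """Integrate-and-fire over time: accumulate membrane potential, emit a
    binary spike when it crosses the threshold, then subtract the threshold."""
    def __init__(self, threshold=1.0):
        super().__init__()
        self.threshold = threshold

    def forward(self, current, mem):
        mem = mem + current
        spikes = (mem >= self.threshold).float()
        mem = mem - spikes * self.threshold      # soft reset
        return spikes, mem

conv1 = nn.Conv2d(3, 16, 3, padding=1)   # first layer: MACs on analog pixels
conv2 = nn.Conv2d(16, 32, 3, padding=1)  # later layers: ACs on binary spikes
lif1, lif2 = SpikingLayer(), SpikingLayer()

def forward_direct_encoding(image, time_steps=2):
    mem1 = mem2 = 0.0
    out = 0.0
    for _ in range(time_steps):
        # Direct encoding: the same analog image is fed at every time step.
        s1, mem1 = lif1(conv1(image), mem1)
        s2, mem2 = lif2(conv2(s1), mem2)
        out = out + s2
    return out / time_steps                  # spike-rate output

print(forward_direct_encoding(torch.rand(1, 3, 32, 32)).shape)
```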
Spatial understanding is a fundamental aspect of computer vision and integral for human-level reasoning about images, making it an important component for grounded language understanding. While recent large-scale text-to-image synthesis (T2I) models have shown unprecedented improvements in photorealism, it is unclear whether they have reliable spatial understanding capabilities. We investigate the ability of T2I models to generate correct spatial relationships among objects and present VISOR, an evaluation metric that captures how accurately the spatial relationship described in text is generated in the image. To benchmark existing models, we introduce a large-scale challenge dataset SR2D that contains sentences describing two objects and the spatial relationship between them. We construct and harness an automated evaluation pipeline that employs computer vision to recognize objects and their spatial relationships, and we employ it in a large-scale evaluation of T2I models. Our experiments reveal a surprising finding that, although recent state-of-the-art T2I models exhibit high image quality, they are severely limited in their ability to generate multiple objects or the specified spatial relations such as left/right/above/below. Our analyses demonstrate several biases and artifacts of T2I models such as the difficulty with generating multiple objects, a bias towards generating the first object mentioned, spatially inconsistent outputs for equivalent relationships, and a correlation between object co-occurrence and spatial understanding capabilities. We conduct a human study that shows the alignment between VISOR and human judgment about spatial understanding. We offer the SR2D dataset and the VISOR metric to the community in support of T2I spatial reasoning research.
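A minimal sketch of the centroid-comparison step implied by the automated evaluation pipeline: given detector outputs for the two named objects, decide whether the generated spatial relation matches the text, and score the fraction of images that are correct. The relation vocabulary and the convention that a missed object scores zero follow the description above; the object detector itself and the exact VISOR variants are omitted.

```python
def relation_correct(box_a, box_b, relation):
    """box_* are [x1, y1, x2, y2] in image coordinates (origin at top-left).
    Returns True if object A's centroid sits in the stated relation to B's."""
    ax, ay = (box_a[0] + box_a[2]) / 2.0, (box_a[1] + box_a[3]) / 2.0
    bx, by = (box_b[0] + box_b[2]) / 2.0, (box_b[1] + box_b[3]) / 2.0
    if relation == "left of":
        return ax < bx
    if relation == "right of":
        return ax > bx
    if relation == "above":
        return ay < by          # smaller y is higher in the image
    if relation == "below":
        return ay > by
    raise ValueError(f"unknown relation: {relation}")

def visor_like_score(samples):
    """samples: one (box_a, box_b, relation) tuple per generated image, with
    None in place of a box when the detector missed that object (scores 0)."""
    hits = sum(
        a is not None and b is not None and relation_correct(a, b, r)
        for a, b, r in samples
    )
    return hits / len(samples) if samples else 0.0

print(visor_like_score([((10, 10, 50, 50), (100, 10, 140, 50), "left of"),
                        (None, (100, 10, 140, 50), "above")]))  # 0.5
```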
This paper introduces a novel method of deep neural style transfer that generates style images from freeform user text input. The language model and style transfer model form a seamless pipeline that can create output images with similar losses and improved quality when compared to baseline style transfer methods. The language model returns a closely matching image given a style text and description input, which is then passed to the style transfer model with an input content image to create a final output. A proof-of-concept tool is also developed to integrate the models and demonstrate the effectiveness of deep image style transfer from freeform text.
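A minimal sketch of the retrieval half of such a pipeline: a text-image model scores a bank of candidate style images against the freeform style text and returns the best match, which would then be handed to any off-the-shelf style transfer model together with the content image. CLIP is used here purely as an illustrative stand-in for the paper's language model, and the style bank is synthetic.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative retrieval model; the paper's language model may differ.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def retrieve_style(style_text, style_bank):
    """Return the style image whose embedding best matches the style text."""
    inputs = processor(text=[style_text], images=style_bank,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    best = out.logits_per_text.argmax(dim=-1).item()
    return style_bank[best]

# Stand-ins for a curated bank of style images.
style_bank = [Image.new("RGB", (224, 224), color=c) for c in ["navy", "orange"]]
style = retrieve_style("swirling night sky in thick oil paint", style_bank)
# `style` would then be passed, with the content image, to a neural style
# transfer model to produce the final output.
```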
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicles (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Videos often capture objects, their visible properties, their motion, and the interactions between different objects. Objects also have physical properties such as mass, which the imaging pipeline is unable to directly capture. However, these properties can be estimated by utilizing cues from relative object motion and the dynamics introduced by collisions. In this paper, we introduce CRIPP-VQA, a new video question answering dataset for reasoning about the implicit physical properties of objects in a scene. CRIPP-VQA contains videos of objects in motion, annotated with questions that involve counterfactual reasoning about the effect of actions, questions about planning in order to reach a goal, and descriptive questions about visible properties of objects. The CRIPP-VQA test set enables evaluation under several out-of-distribution settings -- videos with objects with masses, coefficients of friction, and initial velocities that are not observed in the training distribution. Our experiments reveal a surprising and significant performance gap in terms of answering questions about implicit properties (the focus of this paper) and explicit properties of objects (the focus of prior work).
Efficient disassembly and recovery of materials from waste electrical and electronic equipment (WEEE) is a critical step in moving global supply chains from carbon-intensive, mined materials to recycled and renewable ones. Conventional recycling processes rely on shredding and sorting waste streams, but for WEEE comprised of many different materials, we explore targeted disassembly of numerous objects to improve material recovery. Many WEEE objects share key features and therefore look very similar, yet their material composition and internal component layout can differ, so an accurate classifier is critical for the subsequent disassembly steps that enable accurate material separation and recovery. This work introduces RGB-X, a multi-modal image classification approach that utilizes key features from external RGB images together with those generated from X-ray images to accurately classify electronic objects. More specifically, this work develops Iterative Class Activation Mapping (iCAM), a novel network architecture that explicitly focuses on the fine details in the multi-modal feature maps that are needed for accurate electronic object classification. To train the classifier, large and well-annotated X-ray datasets of electronic objects are lacking due to expense and the need for expert guidance. To overcome this issue, we present a novel way of creating a synthetic dataset using domain randomization applied to the X-ray domain. The combined RGB-X approach gives us an accuracy of 98.6% on 10 generations of modern smartphones, compared to individual accuracies of 89.1% (RGB) and 97.9% (X-ray) alone. We provide experimental results to corroborate our results.
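A minimal sketch of a two-branch RGB plus X-ray classifier with feature-level fusion, in the spirit of the multi-modal approach described above. This is not the iCAM architecture: the ResNet-18 backbones, fusion by concatenation, and the 10-way smartphone label space are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

class RGBXClassifier(nn.Module):
    """Two ResNet-18 branches (RGB and X-ray) whose pooled features are
    concatenated and classified; a stand-in for multi-modal fusion."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.rgb = models.resnet18(weights=None)
        self.xray = models.resnet18(weights=None)
        feat = self.rgb.fc.in_features
        self.rgb.fc = nn.Identity()    # keep the 512-d pooled features
        self.xray.fc = nn.Identity()
        self.head = nn.Linear(2 * feat, num_classes)

    def forward(self, rgb_img, xray_img):
        fused = torch.cat([self.rgb(rgb_img), self.xray(xray_img)], dim=1)
        return self.head(fused)

model = RGBXClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```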